This is my first book review, ever — and from the way I’m kicking off, maybe you could sniff out that I don’t really know how to do this, but neither do I care — I’ll just dump my thoughts here for myself if not anybody else. After finishing this book, I gathered some scrap paper and an old pencil and jotted down anything that came to mind while flipping through it again. I took my time in writing down my opinions on each chapter, each concept, and then while digitising all of it, spent more time cleaning up the disorganised and confusing gibberish I churned out so the product is, I hope, sufficiently reader-friendly.
I headed into the book with an optimistic attitude, eager to absorb insight into the tech scene and how its giants achieved what they have, while welcoming Foroohar’s educated commentary. And unsurprisingly, it didn’t disappoint — having digested the book, I’m now writing this review with remarkably better-informed perspectives and opinions. I won’t follow the chapter order of the book, and I really shouldn’t, as the wealth of key ideas is dispersed across the many chapters due to their interlinked nature. Furthermore, rather counterproductively to the spirit of a ‘book review’, I plan to include information from sources outside the book, including online videos and articles. So, the author’s key points will be grouped into their respective sections for clarity, and for each topic I aim to provide a concise summary of the argument, including some of her sentiments I don’t fully agree with, then put forward my own thoughts and ideas, where they’re at least constructive.
In this book, Foroohar doesn’t pull any punches toward Big Tech and the leaders who dictate their respective industries. She provides countless examples to convey just how powerful and successful they are, which initially had me skeptical that she had fallen into the swelling community of bitter, sanctimonious technology critics who love to complain — I’m glad to report my suspicions were largely unfounded. The technology journalist, having pored over extensive research and interviewed key experts and insiders, proved to know more than I could’ve imagined about Big Tech’s business models, history, and vast societal impact. She is meticulous in her coverage, rarely letting a fragment of information or judgement worth noting escape. But that doesn’t mean I don’t have any additional or contrasting perceptions. Again, I’m not nearly as accomplished or seasoned as Foroohar is, but any reader with a passably analytical mind of his own should be granted this much — to critique.
There’s one feature the Biggest of Big Tech exploit in common, and that's data. Of course it’s data. Data as a resource is so unique and critical due to its ability and tendency to facilitate a self-serving cycle if utilised efficiently, a process so prominent it’s been branded The Network Effect. Here’s how it works. Take a Big Tech firm, say Google. Its ubiquity stems not from some genius in its algorithm — not to discredit their engineers — but from the tremendous amount of data it wields in its arsenal. This data, put through the algorithm, allows for precise search results that return useful and accurate information, links, and all sorts of details to a user doing a Google search. And with auto-complete, the user never even has to finish his sentence. While this user leaves contented, Google gains new information from recording how much he scrolled, what he clicked, how long he spent finding what he needed — more data, to add to its already immense collection. Now with better data, it knows more about users and this particular subject they searched up, and becomes better at organising results for the next user to optimise some metric, such as how far down users scroll before clicking a link. It’s clear how this cyclical process grows Google’s pool of data, strategically evolving its algorithm to get better and better. And it’s also apparent how this develops a natural monopoly, as competitors with fewer users and without such data are less efficient at providing results, and in turn attract yet fewer users — you get the gist. Essentially, all Google really needed was the reputation of being the go-to search engine, and once the ball got rolling, that’s exactly what it became.
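If you like seeing things in code (I do), the self-feeding loop above can be sketched as a toy simulation. Every number here is invented, and the "quality advantage squared" rule is just my own crude stand-in for users disproportionately flocking to whichever engine gives better results:

```python
# Toy simulation of the network-effect loop described above (all numbers invented).
# Each round, users favour the engine with more data behind its results; the
# quality edge is amplified (squared here, an arbitrary choice) because better
# results attract a disproportionate share of searches, and every search
# performed feeds a sliver of data back to the engine that served it.

def simulate(rounds: int = 20, searches_per_round: int = 1000) -> float:
    data = {"leader": 100.0, "rival": 90.0}  # starting stockpiles (arbitrary)
    share = 0.5
    for _ in range(rounds):
        # Share of searches the leader wins this round, amplified by quality.
        q_lead, q_riv = data["leader"] ** 2, data["rival"] ** 2
        share = q_lead / (q_lead + q_riv)
        # Each search adds data to the engine that handled it.
        data["leader"] += searches_per_round * share * 0.1
        data["rival"] += searches_per_round * (1 - share) * 0.1
    return share

print(f"Leader's share after 20 rounds: {simulate():.0%}")
```

Run it and a modest 10% head start in data snowballs into a dominant majority of searches, which is the whole point: the rich get richer without the algorithm itself improving at all.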
Somewhere within this pattern, Google has to make money, and that’s where the advertisers come in. To simplify an obviously more rigorous process, Google puts up keywords for auction to an audience of relevant companies, and when these keywords are searched, ads by the highest bidders are shown on the screen, above the actually legitimate results. Different tech giants monetise data in their respective lucrative ways. Facebook, another problematic giant, uses the uniquely multifaceted data of individual users to build profiles for targeted advertising, making bank from advertisers who can be confident that Facebook’s uncannily detailed information will put their products and services in front of those most likely to be interested.
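For the curious, here's roughly what that auction looks like in code. This is a bare-bones second-price design, in the spirit of the generalised second-price auctions search advertising is known for (the real thing factors in ad quality, multiple slots, and much more); the bidders and amounts are made up:

```python
# Simplified second-price keyword auction (invented bidders and bids).
# The highest bidder wins the ad slot, but pays the runner-up's bid,
# a classic design that encourages bidders to bid their true value.

def run_auction(bids: dict) -> tuple:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1]  # second-highest bid sets the price
    return winner, price_paid

bids = {"ShoeCo": 2.50, "SneakerHub": 1.80, "BootBarn": 1.20}
winner, price = run_auction(bids)
print(winner, price)  # ShoeCo wins the slot but pays only 1.80
```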
I’m certain we’re on the same page by now: data is pivotal to most of Big Tech’s business models. And for that reason, technology activists are calling for data to be classified as a taxable resource, to be declared and levied just as monetary corporate profits are. Companies would have to officially declare the amounts and types of data they possess, alongside all their other income, capital and investments, to the authorities — with watchdogs, regular audits, the whole deal. At first glance this does seem like a workable idea, capable of rebalancing power out of Big Tech’s hands and back into the sea of competition. Data is a resource, there’s no arguing out of that one — heck, there’s a whole professional field dedicated to studying it to aid businesses. And the more you have of it, the better decisions you can make, which in Google’s and Facebook’s cases means choosing which ads to force down which users’ throats. At anyone else’s disposal, this data would remain immensely useful, meaning there’s nothing exceptional about the way these particular giants use it; but the self-feeding framework they exploit will always prevent their competitors from acquiring such amounts of data. And they want to keep it that way.
But how? Who gets taxed on data, and how much? Everyone owns data, from multinational conglomerates to emerging startups, to a child’s roadside lemonade stand that adjusts its sweetness according to feedback from its customers. Where do we draw this impossible line to separate intelligent, commendable business analytics from tyrannical monopolistic dominance? Perhaps we could just tax data based on how much financial profit it directly (or indirectly) generates — Google, Facebook, and the other notorious tech giants would then be levied heavily for their overwhelming revenues. It seems the best, most impartial method of taxing data is to progressively tax the firm’s profits arising from data. Sounds familiar, because I just described the status quo: if you have a truckload of data and use it for profit, you’ll get taxed more on your greater profit, period. An additional, unique tax on data, of all the resources that can lead to increased profits, would be a biased penalty on technologically advanced companies.
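To make the status-quo point concrete, here's a toy progressive tax function (the brackets and rates are entirely invented): a firm that turns more data into more profit automatically pays more, with no separate levy on the data itself needed.

```python
# Toy progressive profit tax (brackets entirely invented) illustrating the
# status quo described above: each slice of profit is taxed at its bracket's
# rate, so more data-driven profit already means more tax.

BRACKETS = [(0, 0.10), (1_000_000, 0.20), (10_000_000, 0.35)]  # (threshold, rate)

def tax_owed(profit: float) -> float:
    owed = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        taxable = max(0.0, min(profit, upper) - threshold)  # slice in this bracket
        owed += taxable * rate
    return owed

print(tax_owed(500_000))    # lemonade-stand scale: taxed lightly
print(tax_owed(2_000_000))  # data-rich firm: its extra profit taxed at 20%
```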
Data itself cannot be taxed, while data’s contribution to profits is already taxed. What we should be focusing on is identifying and penalising Big Tech’s questionable strategies, like deliberate anticompetitive practices and the facilitation of addiction, which I will discuss shortly. The distinction between punishment and taxation must be clear — firms should 100% be punished for immoral decisions, but using data to grow isn’t one of them. And if these giants are natural monopolies, we should also shift our focus away from forcefully inducing more competition (and consequently more economic inefficiency), and toward simply treating them less like private firms and more like utilities or platforms, enforcing higher standards and a greater duty of care.
“We have grown so accustomed to vast collection of our personal data and breaches of our privacy by both government agencies and private companies that new revelations no longer come as a surprise.” -Edward Frenkel, PhD, The Huffington Post
Foroohar spends a lot of the book cataloguing the alarming range of anticompetitive methods Big Tech firms have employed over the decades. She draws a parallel between these giants and the railroad companies of the Gilded Age that demanded anything they wanted from customers, refusing to serve them otherwise (here’s an insightful article). The men at the top wielded so much power in such a key industry that corruption and injustice were bound to occur. Now shifting our focus back to this century: when threatened by other search engines, Google struck a deal with one to take over its searching process (Yahoo), and filtered the other two out of its own search results (Yelp, Foundem). By doing Yahoo’s searches for it, Google received not only brand recognition but, more importantly, the data acquired from the searching process itself. And by hiding the results of Yelp and Foundem while showing its own services more prominently above, Google effectively eliminated them from contention, eerily like an 1880s railroad owner refusing service to a worker he wasn’t fond of. Amazon copies and rebrands books from small businesses, then lists them under its own Amazon brand and guarantee that consumers trust, at a much discounted price. And unless Facebook could get all the data, it wouldn’t form partnerships with small games and programmes. These guys can’t get called out because they own too much, and they set the rules.
With their vast and intricate connections, Big Tech lobbied for the passing of the America Invents Act (AIA) in 2011, which made it significantly more difficult to get a patent approved. It’s supposed to encourage transparent collaboration between entities for a better common future, but what it really achieved was to devalue intellectual property (IP), giving Big Tech both the ability and the incentive to seize unpatented innovative ideas from vulnerable startups and researchers, then bully them out of competition. Facebook thought Snapchat’s unique element of non-permanent media sharing was cool, so it turned around and introduced Instagram Stories. Ironically, the AIA exacerbated the “innovation black hole” that plagues just about every modern industry: entrepreneurs, researchers and venture capitalists stay away from any field Big Tech is likely to take an interest in, instead opting to relocate to regions where patents are accessible and their IP is respected.
People get upset about this, understandably, but then find that Big Tech is three steps ahead at every turn, with all its grounds covered. The public debate over the regulation of powerful firms is, curiously, orchestrated by a few tech giants themselves. Google has, rather despicably, funded several reputable and supposedly trustworthy research papers arguing against regulation, and contributed to various Non-Governmental Organisations (NGOs) and think tanks to butter them up for whenever Google needs some public backing.
Surprisingly, traditional governing bodies don’t see these firms as overly dominant, or their actions as anticompetitive. In 2012, the US Department of Justice (DOJ) ruled against a group of small book publishers in favor of Amazon, in a shocking verdict that held the small publishers were instead the ones being anticompetitive, by banding together to break away from Amazon. That the DOJ could even arrive at such a conclusion all but reveals the close-mindedness with which many authorities stubbornly apply antitrust law. They focus on only one consideration: did consumers suffer? In Amazon’s case, consumers were swimming in discounted prices and anything but suffering, so the DOJ was cool with it. As for Amazon’s stealing and predatory pricing, driving other sellers on its marketplace out of business so it could amass more and more power, the DOJ didn’t bother as much.
Above are just a few of the numerous ways Big Tech messes with others for its own gain, and how it has miraculously avoided heavy regulation so far. And these, not data itself, are the activities so clearly wrong and anticompetitive that they have to be penalised and changed. A powerful private company cannot simultaneously own a service industry and do business in it (think Google’s manipulation of its search algorithm, or Amazon’s predatory listings on its own website). With such agency and influence, they must be regulated as Platform Utilities, a concept popularised by US Senator Elizabeth Warren: their platform services separated from their own conduct of business, and kept fair and nondiscriminatory at all times under official oversight. And I believe that’s fair: either you have healthy competition and do what you want, or you dominate the entire industry and get scrutinised with every move you make. But to be clear at this point, I still firmly believe in a fundamental difference between dominating a market as its natural monopoly and actively being anticompetitive. Big Tech does both, but only the latter is wrong.
Consulting firm Cambridge Analytica is now infamous for its involvement in the 2016 US Presidential Elections, for which it harvested data from millions of Facebook users and, without their consent or even awareness, used it against them in political advertisements in favor of Republican candidate Donald Trump. Based on users’ personality traits, different advertisements were targeted toward them: neutrals were fed pro-Trump or anti-Clinton propaganda, while strong leftists were discouraged from turning out to vote altogether.
How did Facebook get tangled up in all this? It acted as the platform on which the scandal took place. But the US$5 billion fine for its involvement puzzles me. Personally, I don’t understand the concentrated bitterness toward Facebook, rather than toward the masterminds behind the data weaponization, or the actors behind the actual malicious advertisements. Could Facebook have been more cautious and made conscious efforts to better protect its users’ data? Probably. But is it really Facebook’s duty to revamp its business model, built on advertising and the usage of data, just to prevent something that might happen? Facebook was obviously the enabler, and central to the manipulation plans, so actions were rightfully taken to prevent a repeat in the future. However, I simply don’t see why it should have been boycotted and fined that heavily for a crime another party committed using its platform.
I also want to discuss another event with much the same theme as the above. The Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) were passed in 2018, making it illegal for platforms to knowingly facilitate sex trafficking. A noble initiative, but not the brightest, in my opinion. It might feel a little unfair to these platform companies, which now have to devote resources to meticulously tracking what everyone posts, but nobody else is going to do it for them, and as (not exactly) public utilities, it might just make sense. Something more problematic, however: without perfect knowledge of which content could be hinting at sex trafficking and which isn’t, they simply remove everything remotely related to it, now that they’re legally accountable. It’s natural to start feeling uneasy about people’s freedom of speech. The repercussions don’t end there, as the Acts’ vague definitions of sex trafficking end up marginalizing perfectly legal sex workers even more than before. For these reasons and more, FOSTA-SESTA faced public criticism, which I fully stand by.
I don’t understand why Facebook’s penalty, on the other hand, was so welcomed. Perhaps it ironically (I’ll get to the irony in the next paragraph) depends on which issue makes the best headlines: “Facebook Impedes Democracy” is prettier than “Facebook Wasn’t Responsible”, while “Sex Workers Marginalized” beats “Trump Passed Splendid Acts”. I must be missing a few nuanced differences between the two cases, but they’re similar enough for me to question the fluctuating general attitude toward platform companies when someone else misbehaves on them.
Speaking of making good headlines, I have to mention the filter bubble. You get trapped in one when the algorithm of a service, programmed to keep you online as long as possible, recommends new content based on what it thinks you’d like, according to data on what you’ve watched. By clicking on one cute cat video, you prompt YouTube to push more videos of kittens and furry animals to you in the future. This becomes a problem when a kid (or Kyrie Irving) spirals down a rabbit hole of flat earth theory videos, or gets radicalised by extremist propaganda, without ever getting the chance to consider alternative perspectives for a balanced picture. Facebook filters users’ feeds based on the likelihood of users clicking the posts or articles, and since incendiary messages are more sensational, and fake news more eye-catching, screw it, it’ll just feature those most conspicuously. People are growing more distinctly leftist and rightist as they’re fed one-sided information and never the other side, and anyone claiming Big Tech doesn’t know is blatantly lying. Big Tech knows — it just doesn’t care.
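A feed ranked purely on predicted engagement takes only a few lines to sketch (the posts and scores below are invented). Notice that nothing in it penalises being misleading or incendiary; the most sensational item wins by construction:

```python
# Minimal sketch of engagement-ranked feed filtering, in the spirit of the
# mechanism described above (posts and predicted-click scores are invented).

posts = [
    {"title": "City council budget report", "predicted_clicks": 0.02},
    {"title": "You won't BELIEVE this outrage!", "predicted_clicks": 0.31},
    {"title": "Local library extends hours", "predicted_clicks": 0.05},
]

def build_feed(posts, top_n=2):
    # Rank by predicted engagement alone; accuracy and tone play no part.
    ranked = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)
    return [p["title"] for p in ranked[:top_n]]

print(build_feed(posts))  # the outrage bait tops the feed
```

The fix people usually propose, adding a quality or diversity term to the ranking key, is exactly the overhaul of incentives the business model resists.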
If Big Tech did care, though, what could it change? It could give clear disclaimers that content is being filtered away, or provide the option to turn off content personalisation, but neither half-hearted method is really going to change anything. The only way I see this going away is via a thorough overhaul of a business model that does anything and everything to maximise clicks and views.
“Of all the states of mind that companies and brands seek to induce, addiction is by far the most desirable.”
Foroohar seems particularly upset about this one. She uses an anecdote of her young son, whose addiction to his mobile devices led her to realise the severity of Big Tech’s social erosion. There isn’t much news for you here, because I’m sure everyone knows social media apps and video games are designed to keep you on them, so they can use your attention to collect more data and sell more ads. However, not everyone knows just how much effort these organisations put into getting users entirely hooked. Tech giants work with psychology experts to research methods that keep their users as addicted as possible and consistently coming back for more. The addicted begin to lose the ability to focus on other subjects and see their attention spans shrink, among other consequences that uncannily mirror those of drug addicts. It’s in the very fabric of government to look out for society by regulating otherwise profit-centered players. Carbon emissions are taxed. Drugs are banned or heavily controlled. When it comes to addictive technology, suddenly, not so much.
At this point it’s clear that Big Tech doesn’t give a damn about its users’ well-being. The only topic left to ponder is how accountable it should be held. Is it really wrong of a tech firm to seek users’ prolonged engagement with its services, so as to boost profits? Foroohar thinks it is, but the more I consider it, the less clear-cut the answer becomes. Not everyone conducts research that deliberately targets human vulnerabilities — some apps and games might just be very engrossing. How do we differentiate between the good design of products and the evil engineering of addiction? Will we just punish merit, by penalising every service people get addicted to? Addiction is a scale, and the grey area is just too hazy. Barring a revamp of the entire internet, where products and services remove all addictive qualities and act solely in consumers’ interests, we can only hope to catch the ones huddled in the lab trying to come up with the best way to have their victims itching for more.
Overall, this book has been enlightening in various ways, comprehensively discussing Big Tech’s questionable decisions over the years. A couple of the issues Foroohar studies thoroughly had been on my mind before, but never with this much depth and clarity. The difficulty I (and Foroohar, probably) had in sorting the multitude of issues into discrete chapters proves just how interconnected these problems are. Foroohar also offered her fair share of solutions, many of which I think are either too idealistic or wouldn’t be effective enough, if at all. Then again, I guess if anyone had a better idea we wouldn’t be in this pickle today.
And it’s done! That was fun, it really was. Now it’s time for me to heed Foroohar’s call and get off my computer. (Rana Foroohar ended her book on a corny joke, and so will I.)